Cross-Modality Deep Feature Learning for Brain Tumor Segmentation
Background
Recent advances in machine learning and the growing availability of digital medical images have opened up an opportunity to address the challenging brain tumor segmentation (BTS) task using deep convolutional neural networks.
However, unlike RGB image data, which are widely available, the medical image data used in brain tumor segmentation are relatively scarce in data scale but richer in modality information.
Proposed Framework
To this end, this paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from multi-modality MRI data. The core idea is to mine rich patterns across the multi-modality data to compensate for the limited data scale.
The proposed cross-modality deep feature learning framework consists of two learning processes:
1. Cross-Modality Feature Transition (CMFT) Process: Learns rich feature representations by transferring knowledge across different modalities (see the first sketch after this list).
2. Cross-Modality Feature Fusion (CMFF) Process: Fuses knowledge from the different modalities to enhance the representation capability (see the second sketch after this list).
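To make the CMFT idea concrete, the sketch below shows one way a cross-modality transition can be set up: a small encoder-decoder translates one MRI modality (e.g., T2) into another (e.g., FLAIR), so that the encoder is pushed to learn modality-rich features that can later be reused for segmentation. The architecture, module names (e.g., `ModalityTranslator`), and the plain L1 reconstruction loss are illustrative assumptions, not the paper's exact design.

```python
# Illustrative sketch of a cross-modality feature transition (CMFT) step.
import torch
import torch.nn as nn

class ModalityTranslator(nn.Module):
    def __init__(self, base_channels=16):
        super().__init__()
        # Encoder: single-modality 2D slice -> feature map (reused for segmentation)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, base_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: feature map -> slice in the target modality
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, 1, 3, padding=1),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.decoder(feats), feats

# One illustrative training step: predict FLAIR slices from T2 slices (dummy data).
t2 = torch.randn(4, 1, 128, 128)
flair = torch.randn(4, 1, 128, 128)
model = ModalityTranslator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

pred_flair, _ = model(t2)
loss = nn.functional.l1_loss(pred_flair, flair)  # simple reconstruction loss
loss.backward()
optimizer.step()
```

For the CMFF process, the following sketch shows a minimal fusion head: features extracted from each modality branch are concatenated along the channel dimension and fused by 1x1 convolutions that predict segmentation logits. Again, the module name (`FusionHead`) and the simple concatenation-plus-convolution scheme are assumptions used for illustration.

```python
# Illustrative sketch of cross-modality feature fusion (CMFF).
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, per_modality_channels=32, num_modalities=4, num_classes=4):
        super().__init__()
        fused = per_modality_channels * num_modalities
        self.fuse = nn.Sequential(
            nn.Conv2d(fused, fused // 2, 1), nn.ReLU(inplace=True),  # 1x1 fusion
            nn.Conv2d(fused // 2, num_classes, 1),                   # class logits
        )

    def forward(self, modality_feats):
        # modality_feats: list of (B, C, H, W) feature maps, one per modality
        return self.fuse(torch.cat(modality_feats, dim=1))

# Fuse dummy features from four MRI modalities (T1, T1ce, T2, FLAIR).
feats = [torch.randn(2, 32, 64, 64) for _ in range(4)]
logits = FusionHead()(feats)  # shape: (2, num_classes, 64, 64)
print(logits.shape)
```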
Experimental Results
Comprehensive experiments on the BraTS benchmarks show that the proposed cross-modality deep feature learning framework effectively improves brain tumor segmentation performance compared with both baseline and state-of-the-art methods.
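Segmentation quality on the BraTS benchmarks is commonly reported with the Dice score. The short sketch below computes Dice between a predicted and a ground-truth binary mask; the function name and the dummy masks are illustrative only.

```python
# Minimal sketch of the Dice score between binary segmentation masks.
import torch

def dice_score(pred_mask: torch.Tensor, gt_mask: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred = pred_mask.bool()
    gt = gt_mask.bool()
    intersection = (pred & gt).sum().item()
    return (2.0 * intersection + eps) / (pred.sum().item() + gt.sum().item() + eps)

# Dummy example: two overlapping square masks.
pred = torch.zeros(128, 128); pred[32:96, 32:96] = 1
gt = torch.zeros(128, 128); gt[40:104, 40:104] = 1
print(f"Dice: {dice_score(pred, gt):.3f}")
```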
The cross-modality learning strategy addresses the challenge of limited medical imaging data by leveraging complementary information across multiple MRI modalities.